test bed


SPICE-HL3: Single-Photon, Inertial, and Stereo Camera dataset for Exploration of High-Latitude Lunar Landscapes

Rodríguez-Martínez, David, van der Meer, Dave, Song, Junlin, Bera, Abishek, Pérez-del-Pulgar, C. J., Olivares-Mendez, Miguel Angel

arXiv.org Artificial Intelligence

Exploring high-latitude lunar regions presents an extremely challenging visual environment for robots. The low sunlight elevation angle and minimal light scattering result in a visual field dominated by a high dynamic range featuring long, dynamic shadows. Reproducing these conditions on Earth requires sophisticated simulators and specialized facilities. We introduce a unique dataset recorded at the LunaLab of the SnT - University of Luxembourg, an indoor test facility designed to replicate the optical characteristics of multiple lunar latitudes. Our dataset includes images, inertial measurements, and wheel odometry data from robots navigating seven distinct trajectories under multiple illumination scenarios, simulating high-latitude lunar conditions from dawn to nighttime, with and without the aid of headlights, resulting in 88 distinct sequences containing a total of 1.3M images. Data was captured using a stereo RGB-inertial sensor, a monocular monochrome camera, and, for the first time, a novel single-photon avalanche diode (SPAD) camera. We recorded both static and dynamic image sequences, with robots navigating at slow (5 cm/s) and fast (50 cm/s) speeds. All data is calibrated, synchronized, and timestamped, providing a valuable resource for validating perception tasks ranging from vision-based autonomous navigation to scientific imaging, whether for future lunar missions targeting high-latitude regions or for robots operating in perceptually degraded environments. The dataset can be downloaded from https://zenodo.org/records/13970078?preview=1, and a visual overview is available at https://youtu.be/d7sPeO50_2I. All supplementary material can be found at https://github.com/spaceuma/spice-hl3.


The POLAR Traverse Dataset: A Dataset of Stereo Camera Images Simulating Traverses across Lunar Polar Terrain under Extreme Lighting Conditions

Hansen, Margaret, Wong, Uland, Fong, Terrence

arXiv.org Artificial Intelligence

We present the POLAR Traverse Dataset: a dataset of high-fidelity stereo pair images of lunar-like terrain under polar lighting conditions designed to simulate a straight-line traverse. Images from individual traverses with different camera heights and pitches were recorded at 1 m intervals by moving a suspended stereo bar across a test bed filled with regolith simulant and shaped to mimic lunar south polar terrain. Ground truth geometry and camera position information was also recorded. This dataset is intended for developing and testing software algorithms that rely on stereo or monocular camera images, such as visual odometry, for use in the lunar polar environment, as well as to provide insight into the expected lighting conditions in lunar polar regions. The lunar south polar region is of particular interest to upcoming NASA missions such as the Volatiles Investigating Polar Exploration Rover (VIPER) due to the existence of water ice. (Figure 1: Hardware setup extended over SSERVI test bed with lunar terrain and lighting.)


Optimal Control of Connected Automated Vehicles with Event-Triggered Control Barrier Functions: a Test Bed for Safe Optimal Merging

Sabouni, Ehsan, Ahmad, H. M. Sabbir, Xiao, Wei, Cassandras, Christos G., Li, Wenchao

arXiv.org Artificial Intelligence

We address the problem of controlling Connected and Automated Vehicles (CAVs) in conflict areas of a traffic network subject to hard safety constraints. It has been shown that such problems can be solved through a combination of tractable optimal control problems and Control Barrier Functions (CBFs) that guarantee the satisfaction of all constraints. These solutions can be reduced to a sequence of Quadratic Programs (QPs) which are efficiently solved online over discrete time steps. However, guaranteeing the feasibility of the CBF-based QP method within each discretized time interval requires the careful selection of sufficiently small time steps. This imposes computational requirements and inter-agent communication rates that may hinder the controller's application to real CAVs. In this paper, we overcome this limitation by adopting an event-triggered approach for CAVs in a conflict area, such that the next QP is triggered by properly defined events with a safety guarantee. We present a laboratory-scale test bed we have developed to emulate merging roadways using mobile robots as CAVs, which can be used to demonstrate that the event-triggered scheme is computationally efficient and handles measurement uncertainties and noise better than time-driven control while guaranteeing safety.
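The safety-filter idea behind the CBF-QP approach summarized above can be sketched in a few lines. This is a minimal illustration, not the paper's event-triggered controller: the speed barrier h(v) = v_max - v, the gain alpha, and the single-constraint closed form are all illustrative assumptions.

```python
def cbf_qp_filter(u_ref, h, alpha):
    """Minimally modify a reference control u_ref so the CBF condition
    dh/dt + alpha * h >= 0 holds.

    For the speed barrier h(v) = v_max - v with dynamics dv/dt = u,
    dh/dt = -u, so the condition reduces to u <= alpha * h.  The QP
        min (u - u_ref)^2   s.t.   u <= alpha * h
    then has the closed-form solution below; with a single affine
    constraint no numerical QP solver is needed.
    """
    return min(u_ref, alpha * h)

# Example: v_max = 15 m/s, current speed 14 m/s, desired accel 3 m/s^2.
v_max, v, alpha = 15.0, 14.0, 2.0
h = v_max - v                         # barrier value (1.0: still safe)
u_safe = cbf_qp_filter(3.0, h, alpha) # capped at alpha * h = 2.0
```

Far from the barrier, the filter passes u_ref through unchanged; near it, the commanded acceleration is capped. The paper's event-triggered scheme re-solves such QPs only when state-dependent events fire, rather than at every fixed time step.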


Mobility Analysis of Screw-Based Locomotion and Propulsion in Various Media

Lim, Jason, Joyce, Calvin, Peiros, Elizabeth, Yeoh, Mingwei, Gavrilov, Peter V., Wickenhiser, Sara G., Schreiber, Dimitri A., Richter, Florian, Yip, Michael C.

arXiv.org Artificial Intelligence

Robots "in-the-wild" encounter and must traverse widely varying terrain, ranging from solid ground to granular materials like sand to full liquids. Numerous approaches exist, including wheeled and legged robots, each excelling in specific domains. Screw-based locomotion is a promising approach for multi-domain mobility, leveraged in exploratory robotic designs including amphibious vehicles and snake robots. However, unlike other forms of locomotion, the models, parameter effects, and efficiency of multi-terrain Archimedes screw locomotion remain largely unexplored. In this work, we address this missing component in the understanding of screw-based locomotion with comprehensive experimental results and performance analysis across different media. We designed a mobile test bed for indoor and outdoor experimentation to collect this data. Beyond quantitatively showing the multi-domain mobility of screw-based locomotion, we envision future researchers and engineers using the presented results to design effective screw-based locomotion systems.
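As a rough companion to the abstract above, the ideal kinematics of screw locomotion can be stated in one line: one screw revolution advances one pitch, degraded by medium-dependent slip. The function name and the linear slip model below are illustrative assumptions, not the paper's model.

```python
import math

def screw_forward_speed(pitch_m, omega_rad_s, slip=0.0):
    """Ideal forward speed of an Archimedes screw drive.

    One full revolution advances the vehicle by one thread pitch;
    `slip` in [0, 1) discounts this for the medium (0 = firm ground,
    approaching 1 = highly fluid media where the thread barely grips).
    """
    revs_per_s = omega_rad_s / (2.0 * math.pi)
    return pitch_m * revs_per_s * (1.0 - slip)

# A 10 cm pitch screw spinning at 2 rev/s with 30% slip in loose sand:
v = screw_forward_speed(0.10, 2 * 2 * math.pi, slip=0.3)  # about 0.14 m/s
```

In practice the slip ratio is exactly the medium-dependent quantity the paper's multi-terrain experiments characterize empirically, rather than assuming a constant as this sketch does.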


P4P: Conflict-Aware Motion Prediction for Planning in Autonomous Driving

Sun, Qiao, Huang, Xin, Williams, Brian C., Zhao, Hang

arXiv.org Artificial Intelligence

Motion prediction is crucial in enabling safe motion planning for autonomous vehicles in interactive scenarios. It allows the planner to identify potential conflicts with other traffic agents and generate safe plans. Existing motion predictors often focus on reducing prediction errors, yet it remains an open question how well they help the planner identify conflicts. In this paper, we evaluate state-of-the-art predictors through novel conflict-related metrics, such as the success rate of identifying conflicts. Surprisingly, the predictors suffer from a low success rate and thus lead to a large percentage of collisions when we test the prediction-planning system in an interactive simulator. To fill the gap, we propose a simple but effective alternative that combines a physics-based trajectory generator and a learning-based relation predictor to identify conflicts and infer conflict relations. We demonstrate that our predictor, P4P, achieves superior performance over existing learning-based predictors in realistic interactive driving scenarios from the Waymo Open Motion Dataset.
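The physics-based half of a predictor like the one described above can be approximated with a constant-velocity rollout plus a time-indexed proximity check. This is a hedged sketch under assumed parameters (3 s horizon, 0.1 s step, 2 m conflict radius); it is not P4P's actual trajectory generator or relation predictor.

```python
import numpy as np

def rollout_cv(pos, vel, horizon=3.0, dt=0.1):
    """Constant-velocity rollout: predicted 2D positions over the
    horizon, returned as an array of shape (T, 2)."""
    t = np.arange(dt, horizon + dt, dt)[:, None]
    return pos + t * vel

def in_conflict(traj_a, traj_b, radius=2.0):
    """Two predicted trajectories conflict if they come within `radius`
    metres of each other at the same time step (same row index)."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)
    return bool(np.any(d < radius))

# Ego heading east; another agent heading north toward the same point,
# so both reach (15, 0) near t = 1.5 s.
ego   = rollout_cv(np.array([0.0, 0.0]),    np.array([10.0, 0.0]))
other = rollout_cv(np.array([15.0, -15.0]), np.array([0.0, 10.0]))
```

Checking proximity at matched time indices, rather than mere path intersection, is what makes this a conflict test: two paths that cross at different times do not conflict.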


Flinders University Is Testing a Driverless Shuttle Bus On Campus

#artificialintelligence

An autonomous shuttle bus is currently being tested at Flinders University and has now entered the second stage of its trial. Dubbed the "Flinders University Express Shuttle" (FLEX), the bus can carry 11 seated passengers. It operates on a 2.8 km route and is described as a "test bed" for the future of autonomous vehicles in South Australia. In what continues to be one of Australia's only public autonomous vehicle testing programs, the Flinders University autonomous shuttle bus travels around the Tonsley innovation district, between the train station, the residential village, the university and the TAFE. The route is within walking distance, but keep in mind that the service is only a trial at the moment.


Yap

AAAI Conferences

Path planning is a critical part of modern computer games; rare is the game where nothing moves and path planning is unneeded. A* is the workhorse for most path planning applications. Block A* is a state-of-the-art algorithm that is always faster than A* in experiments using game maps. Unlike other methods that improve upon A*'s performance, Block A* is never worse than A*, nor does it require any knowledge of the map. In our experiments, Block A* is ideal for games with randomly generated maps, large maps, or highly dynamic multi-agent environments. Furthermore, in the domain of grid-based any-angle path planning, we show that Block A* is an order of magnitude faster than the previous best any-angle path planning algorithm, Theta*. We empirically show our results using maps from Dragon Age: Origins and Starcraft. Finally, we introduce "populated game maps" as a new test bed that is a better approximation of real game conditions than the standard test beds of this field. The main contributions of this paper are a more rigorous set of experiments for Block A* and the introduction of this new test bed.
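For readers unfamiliar with the baseline that Block A* improves upon, a standard grid A* looks as follows. Block A* itself is not reproduced here; this is only the A* baseline, and the Manhattan heuristic and 4-connected moves are illustrative choices.

```python
import heapq

def astar(grid, start, goal):
    """Standard 4-connected A* with a Manhattan-distance heuristic.

    `grid` is a list of rows with 0 = free, 1 = obstacle; `start` and
    `goal` are (row, col) tuples.  Returns the shortest path length in
    steps, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]     # (f = g + h, g, node)
    best_g = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g
        if g > best_g.get(cur, float("inf")):
            continue                      # stale queue entry; skip
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Block A*'s speedup comes from expanding precomputed blocks of cells at a time instead of single cells, while preserving the optimality guarantees of a search like this one.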


MixMOOD: A systematic approach to class distribution mismatch in semi-supervised learning using deep dataset dissimilarity measures

Calderon-Ramirez, Saul, Oala, Luis, Torrents-Barrena, Jordina, Yang, Shengxiang, Moemeni, Armaghan, Samek, Wojciech, Molina-Cabello, Miguel A.

arXiv.org Machine Learning

In this work, we propose MixMOOD, a systematic approach to mitigating the effect of class distribution mismatch in semi-supervised deep learning (SSDL) with MixMatch. This work is divided into two components: (i) an extensive out-of-distribution (OOD) ablation test bed for SSDL and (ii) a quantitative unlabelled dataset selection heuristic referred to as MixMOOD. In the first part, we analyze the sensitivity of MixMatch accuracy under 90 different distribution mismatch scenarios across three multi-class classification tasks. These are designed to systematically understand how OOD unlabelled data affects MixMatch performance. In the second part, we propose an efficient and effective method, called deep dataset dissimilarity measures (DeDiMs), to compare labelled and unlabelled datasets. The proposed DeDiMs are quick to evaluate and model agnostic. They use the feature space of a generic Wide-ResNet and can be applied prior to learning. Our test results reveal that supposed semantic similarity between labelled and unlabelled data is not a good heuristic for unlabelled data selection. In contrast, the strong correlation between MixMatch accuracy and the proposed DeDiMs allows us to quantitatively rank different unlabelled datasets ante hoc according to expected MixMatch accuracy. This is what we call MixMOOD. Furthermore, we argue that the MixMOOD approach can help standardize the evaluation of different semi-supervised learning techniques under real-world scenarios involving out-of-distribution data.
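The spirit of a quick, model-agnostic dataset dissimilarity measure can be illustrated with a toy statistic: the distance between the mean feature vectors of the labelled and candidate unlabelled pools. The paper's actual DeDiMs operate on Wide-ResNet features; the Gaussian features and the specific statistic below are stand-in assumptions.

```python
import numpy as np

def feature_mean_distance(feats_labelled, feats_unlabelled):
    """A simple dataset dissimilarity: Euclidean distance between the
    mean feature vectors of two datasets, each of shape (N, D).  Any
    fixed, untrained feature extractor could produce the features."""
    mu_l = feats_labelled.mean(axis=0)
    mu_u = feats_unlabelled.mean(axis=0)
    return float(np.linalg.norm(mu_l - mu_u))

rng = np.random.default_rng(0)
labelled = rng.normal(0.0, 1.0, size=(500, 64))
in_dist  = rng.normal(0.0, 1.0, size=(500, 64))  # matched unlabelled pool
ood      = rng.normal(3.0, 1.0, size=(500, 64))  # shifted, OOD-like pool

# Rank candidate unlabelled pools ante hoc: smaller distance first.
scores = sorted(
    [("in_dist", feature_mean_distance(labelled, in_dist)),
     ("ood", feature_mean_distance(labelled, ood))],
    key=lambda kv: kv[1],
)
```

The ranking-before-training usage pattern is the point of MixMOOD: the measure is computed once on fixed features, before any semi-supervised training run is launched.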


Innovation Center Keeps Getting Better - Connected World

#artificialintelligence

How exciting is our industry right now? You might say I'm easily excited, and I'd probably agree, because I just love this stuff, especially when we are talking manufacturing and more. But really, if you take a look at what's happening with AI (artificial intelligence), machine learning, robotics, automation, 5G, blockchain, autonomous vehicles, AR (augmented reality), VR (virtual reality), and all of their enterprise applications … it is truly a great time to be in tech. Innovation is our lifeblood in this industry. As you and I both know, tech often seems to happen at the speed of light.


Expert: Important for Singapore to have range of data

#artificialintelligence

Singapore's plan to put autonomous vehicles (AVs) on the roads here was praised by global robotics expert Ayanna Howard, who raised questions about the way AV tests are currently conducted overseas. A former NASA researcher named by Forbes magazine as one of the top 50 women in tech in the United States, Dr Howard also said there is a need to consider the human factor in testing AVs. "A lot of times, with self-driving cars, the issue is that other cars are driven by humans. Most humans don't always follow traffic rules exactly. In a test done in California, drivers would stop, then go, because they wanted to see what the self-driving car would do," said the 47-year-old professor and chair of interactive computing at Georgia Tech in the US.